Reinvigorating Human Rights in Internet Governance: The UDRP Procedure Through the Lens of International Human Rights Principles
An international legal framework for resolving disputes between trademark owners and domain name holders, the Uniform Domain-Name Dispute-Resolution Policy (“UDRP”), purports to address economic interests; however, fundamental human rights are indirectly implicated in the process (for example, the rights to freedom of expression and peaceful enjoyment of one’s property) or are ingrained within the procedure itself (such as the right to due process). The UDRP was created in 1998 by the Internet Corporation for Assigned Names and Numbers (“ICANN”), which has recently adopted in its organizational bylaws a “Core Value” of respecting “internationally recognized human rights.” In light of these institutional changes, in this Article, I chart the international human rights implications of the procedural aspects of the UDRP. I show how the UDRP’s procedural elements raise numerous due process concerns regarding the deprivation of property rights, which are recognized in international human rights instruments, and I make concrete proposals to improve procedural aspects of the policy in the upcoming UDRP review in 2020. To bring the UDRP procedure in line with “internationally recognized human rights,” the upcoming review should: (1) introduce a clear choice-of-law clause in the UDRP; (2) develop uniform “Supplemental Rules” at the ICANN level to increase uniformity and consistency of the UDRP system; (3) introduce a requirement to disclose and publish all UDRP decisions and statistics; (4) develop uniform standards for accreditation and selection of panelists; (5) require disclosure of conflicts of interest by panelists and Dispute Resolution Providers; (6) introduce regular comprehensive UDRP reviews; (7) reform the rules around communication, and the effectiveness of notice in particular; (8) establish an appeal procedure; and (9) explicitly acknowledge access to courts.
Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State
Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020, is the first successful legal challenge to automated facial recognition technology in the world. In Bridges, the United Kingdom’s Court of Appeal held that the South Wales Police force’s use of automated facial recognition technology was unlawful. This landmark ruling could influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the police’s previously unconstrained discretion to decide whom to target and where to deploy the technology. Yet, while the decision requires that the police adopt a clearer legal framework to limit this discretion, it does not, in principle, prevent the use of facial recognition technology for mass-surveillance in public places, nor for monitoring political protests. On the contrary, the Court held that the use of automated facial recognition in public spaces – even to identify and track the movement of very large numbers of people – was an acceptable means for achieving law enforcement goals. Thus, the Court dismissed the wider impact and significant risks posed by using facial recognition technology in public spaces. It underplayed the heavy burden this technology can place on democratic participation and freedoms of expression and association, which require collective action in public spaces. The Court neither demanded transparency about the technologies used by the police force, which is often shielded behind the “trade secrets” of the corporations who produce them, nor did it act to prevent inconsistency between local police forces’ rules and regulations on automated facial recognition technology. Thus, while the Bridges decision is reassuring and demands change in the discretionary approaches of U.K. police in the short term, its long-term impact in burning the “bridges” between the expanding public space surveillance infrastructure and the modern state is unlikely. In fact, the decision legitimizes such an expansion.
Tackling food marketing to children in a digital world: trans-disciplinary perspectives. Children’s rights, evidence of impact, methodological challenges, regulatory options and policy implications for the WHO European Region
There is unequivocal evidence that childhood obesity is influenced by marketing of foods and non-alcoholic beverages high in saturated fat, salt and/or free sugars (HFSS), and a core recommendation of the WHO Commission on Ending Childhood Obesity is to reduce children’s exposure to all such marketing. As a result, WHO has called on Member States to introduce restrictions on marketing of HFSS foods to children, covering all media, including digital, and to close any regulatory loopholes. This publication provides up-to-date information on the marketing of foods and non-alcoholic beverages to children and the changes that have occurred in recent years, focusing in particular on the major shift to digital marketing. It examines trends in media use among children, marketing methods in the new digital media landscape and children’s engagement with such marketing. It also considers the impact on children and their ability to counter marketing, as well as the implications for children’s rights and digital privacy. Finally, the report discusses the policy implications and some of the recent policy action by WHO European Member States.
Towards international data privacy cooperation: strategies and alternatives
Defence date: 11 June 2014. Examining Board: Professor Giovanni Sartor, EUI (Supervisor); Professor Alexander Trechsel, EUI; Professor Lee A. Bygrave, University of Oslo; Dr. Christopher Kuner, Wilson Sonsini Goodrich & Rosati, Brussels. This thesis is a study of international data privacy cooperation, which has a twofold aim. Firstly, on the empirical level, it investigates the strategies and alternative paradigms for international data privacy cooperation and reveals the potential as well as the limits of different cooperative models. Secondly, on the theoretical level, it relies on a series of analytical constructs from legal and political theory to critically assess various aspects of cooperation and aims to contribute to the further advancement of those methodological frameworks. To accomplish these goals, the thesis adopts a multidisciplinary approach of a newly emerging 'international relations and international law' sub-discipline. Firstly, by relying on rational choice, historical institutionalism and game theory, it examines how different countries have been responding to the continuously evolving data privacy policy challenges, and how these divergent responses created a cooperative stalemate among the leading regulatory states. Secondly, it scrutinizes different approaches to customary international law to explore whether data privacy could be considered a principle of customary international law, which would make the mass-surveillance programmes illegal irrespective of the (non)existence of a binding international data privacy treaty. Then, by relying on international agreement design literature, the thesis explores the different 'hard law dreams' for data privacy and argues that these options are difficult to realize in practice. As an alternative, it adopts a global legal pluralism lens to examine the promise of mutual recognition arrangements, and applies trans-governmental networks theory to see whether data privacy commissioners could improve international cooperation. The thesis suggests that effective international data privacy cooperation requires a certain degree of political commitment from states as well as support from private actors. These preconditions do not currently exist. Persisting uncertainty stemming from the lack of effective cooperative structures is more acute than ever in a modern world that has extended far beyond the Orwellian 'Big Brother' to cover the 'Facebook Universe,' 'Google World,' and unimaginable mass-surveillance by Western governments.
“Transparency Washing” in the Digital Age: A Corporate Agenda of Procedural Fetishism
Contemporary discourse on the regulation and governance of the digital environment has often focused on the procedural value of transparency. This article traces the prominence of the concept of transparency in contemporary regulatory debates to the corporate agenda of technology companies. Looking at the latest transparency initiatives of IBM, Google and Facebook, I introduce the concept of “transparency washing,” whereby a focus on transparency acts as an obfuscation and redirection from more substantive and fundamental questions about the concentration of power, substantial policies, and actions of technology behemoths. While the “ethics washing” of the tech giants has become widely acknowledged, “transparency washing” presents a wider critique of corporate discourse and neoliberal governmentality based on procedural fetishism, which detracts from questions of substantive accountability and obligations by diverting attention to procedural micro-issues that have little chance of changing the political or legal status quo.